
    Translating Universal Scene Descriptions into Knowledge Graphs for Robotic Environment

    Robots performing human-scale manipulation tasks require an extensive amount of knowledge about their surroundings in order to perform their actions competently and in a human-like manner. In this work, we investigate the use of virtual reality technology for robot environment modeling, and present a technique for translating scene graphs into knowledge bases. To this end, we take advantage of the Universal Scene Description (USD) format, which is an emerging standard for the authoring, visualization, and simulation of complex environments. We investigate the conversion of USD-based environment models into Knowledge Graph (KG) representations that facilitate semantic querying and integration with additional knowledge sources. Comment: 6 pages, 3 figures, ICRA 202
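
    The USD-to-KG conversion described above can be illustrated with a minimal sketch: traverse a USD stage with the pxr Python bindings and emit RDF triples with rdflib. This is only a hedged approximation of the idea, not the paper's pipeline; the example.org namespace, the chosen predicates, and the kitchen.usda file name are assumptions.

    from pxr import Usd                                  # USD Python bindings
    from rdflib import Graph, Namespace, Literal, URIRef, RDF

    EX = Namespace("http://example.org/scene#")          # hypothetical namespace

    def usd_to_kg(usd_path: str) -> Graph:
        """Traverse a USD stage and emit one KG node per prim plus parent/child links."""
        stage = Usd.Stage.Open(usd_path)
        g = Graph()
        g.bind("ex", EX)
        for prim in stage.Traverse():
            subj = URIRef(EX + str(prim.GetPath()).strip("/").replace("/", "_"))
            g.add((subj, RDF.type, EX[str(prim.GetTypeName()) or "Prim"]))
            g.add((subj, EX.name, Literal(prim.GetName())))
            parent = prim.GetParent()
            if parent and str(parent.GetPath()) != "/":
                g.add((subj, EX.childOf,
                       URIRef(EX + str(parent.GetPath()).strip("/").replace("/", "_"))))
        return g

    # g = usd_to_kg("kitchen.usda")                      # hypothetical scene file
    # print(g.serialize(format="turtle"))                # semantic querying via SPARQL from here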

    Towards a Neuronally Consistent Ontology for Robotic Agents

    The Collaborative Research Center for Everyday Activity Science & Engineering (CRC EASE) aims to enable robots to perform environmental interaction tasks with close to human capacity. It therefore employs a shared ontology to model the activity of both kinds of agents, empowering robots to learn from human experiences. To properly describe these human experiences, the ontology will strongly benefit from incorporating characteristics of neuronal information processing which are not accessible from a behavioral perspective alone. We therefore propose the analysis of human neuroimaging data to evaluate and validate concepts and events defined in the ontology model underlying most of the CRC projects. In an exploratory analysis, we employed an Independent Component Analysis (ICA) on functional Magnetic Resonance Imaging (fMRI) data from participants who were presented with the same complex video stimuli of activities performed by robotic and human agents in different environments and contexts. We then correlated the activity patterns of the brain networks represented by the derived components with the timings of annotated event categories as defined by the ontology model. The present results demonstrate a subset of common networks with stable correlations and specificity towards particular event classes and groups, associated with environmental and contextual factors. These neuronal characteristics will open up avenues for adapting the ontology model to be more consistent with human information processing. Comment: Preprint of paper accepted for the European Conference on Artificial Intelligence (ECAI) 2023 (minor typo corrections)
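
    As a hedged sketch of the kind of analysis described (not the study's actual pipeline), ICA component time courses can be extracted from a time-by-voxel matrix with scikit-learn and correlated against boxcar regressors built from the annotated event timings; the array shapes, TR value, and component count below are assumptions.

    import numpy as np
    from sklearn.decomposition import FastICA

    def component_event_correlations(bold, events, tr=2.0, n_components=20):
        """bold: (n_timepoints, n_voxels) fMRI matrix; events: {label: [(onset_s, duration_s), ...]}.
        Returns, per event label, the correlation of each ICA component time course with a
        boxcar regressor derived from the annotated event timings."""
        ica = FastICA(n_components=n_components, random_state=0)
        time_courses = ica.fit_transform(bold)            # (n_timepoints, n_components)
        n_tp = bold.shape[0]
        correlations = {}
        for label, spans in events.items():
            regressor = np.zeros(n_tp)
            for onset, duration in spans:                  # boxcar from the event annotations
                start = int(onset // tr)
                stop = min(int((onset + duration) // tr) + 1, n_tp)
                regressor[start:stop] = 1.0
            correlations[label] = np.array(
                [np.corrcoef(time_courses[:, k], regressor)[0, 1] for k in range(n_components)])
        return correlations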

    FailRecOnt - An ontology-based framework for failure interpretation and recovery in planning and execution

    Autonomous mobile robot manipulators have the potential to act as robot helpers at home, improving quality of life for various user populations such as elderly or handicapped people, or to act as robot co-workers on factory floors, helping in assembly applications where collaborating with other operators may be required. However, robotic systems do not show robust performance when placed in environments that are not tightly controlled. An important cause of this is that failure handling often consists of scripted responses to foreseen complications, which leaves the robot vulnerable to new situations and ill-equipped to reason about failure and recovery strategies. Instead of libraries of hard-coded reactions that are expensive to develop and maintain, more sophisticated reasoning mechanisms are needed to handle failure. This requires an ontological characterization of what failure is, which concepts are useful to formulate causal explanations of failure, and integration with knowledge of the available resources, including the capabilities of the robot as well as those of other potential cooperative agents in the environment, e.g. a human user. We propose the FailRecOnt framework as a step in this direction. We have integrated an ontology for failure interpretation and recovery with a contingency-based task and motion planning framework such that a robot can handle uncertainty, recover from failures, and manage human-robot interactions. A motivating example is introduced to justify this proposal, and the proposal has been tested in a challenging scenario.
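
    A minimal, hypothetical sketch of the coupling between failure interpretation and recovery (the class, failure categories, and strategy names are invented for illustration and are not FailRecOnt's actual model):

    from dataclasses import dataclass

    @dataclass
    class Failure:
        """A minimal stand-in for an ontological failure individual."""
        category: str          # e.g. "GraspFailure", "OcclusionFailure"
        context: dict          # symbols grounding the failure (object, pose, agent, ...)

    # Hypothetical mapping from failure categories to candidate recovery strategies.
    RECOVERY_STRATEGIES = {
        "GraspFailure":      ["retry_grasp", "change_grasp_pose", "ask_human_for_help"],
        "OcclusionFailure":  ["move_occluder", "change_viewpoint"],
        "UnreachableObject": ["navigate_closer", "ask_human_for_help"],
    }

    def recover(failure: Failure):
        """Yield candidate recovery actions; a contingency planner would check their preconditions."""
        for strategy in RECOVERY_STRATEGIES.get(failure.category, ["replan_from_scratch"]):
            yield strategy, failure.context

    # Example: a grasp failure on 'cup_1' proposes retrying, re-grasping, or asking for help.
    for action, ctx in recover(Failure("GraspFailure", {"object": "cup_1"})):
        print(action, ctx)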

    An Ontological Model of User Preferences

    The notion of preferences plays an important role in many disciplines, including service robotics, which is concerned with scenarios in which robots interact with humans. These interactions benefit from robots taking human preferences into account. This raises the issue of how preferences should be represented to support such preference-aware decision making. Several formal accounts of preferences exist. However, these approaches fall short of defining the nature and structure of the options a robot has in a given situation. In this work, we therefore investigate a formal model of preferences in which options are non-atomic entities that are defined by the complex situations they bring about.
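
    For illustration only, the structural point that options are non-atomic and are compared via the situations they bring about can be caricatured with a toy score over situation features; the paper's formal account is logic-based and richer than this sketch, and all names below are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Option:
        """A non-atomic option, described by the situation it would bring about."""
        name: str
        situation: dict = field(default_factory=dict)     # feature -> value

    # Hypothetical preference weights over situation features.
    WEIGHTS = {"noise_level": -1.0, "cup_is_full": 2.0, "time_needed_s": -0.01}

    def preferred(a: Option, b: Option) -> Option:
        """Return the option whose induced situation scores higher."""
        score = lambda o: sum(WEIGHTS.get(f, 0.0) * v for f, v in o.situation.items())
        return a if score(a) >= score(b) else b

    quiet = Option("serve_tea_quietly", {"noise_level": 1, "cup_is_full": 1, "time_needed_s": 90})
    fast = Option("serve_tea_quickly", {"noise_level": 5, "cup_is_full": 1, "time_needed_s": 30})
    print(preferred(quiet, fast).name)                    # -> serve_tea_quietly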

    An approach to ultra-tightly coupled data fusion for handheld input devices in robotic surgery

    This paper introduces an ultra-tightly coupled approach to data fusion of optical and inertial measurements. The two redundant sensor systems complement each other well, with the cameras providing absolute positions and the inertial measurements giving low-latency information about derivatives. A targeted application is the tracking of handheld input devices for robotic surgery, where landmarks are not always visible to all cameras. Occlusions occur frequently, especially when bi-manual operation is considered, where one hand can move between the other hand and a camera. The ultra-tightly coupled data fusion uses 2D camera measurements to correct pose estimates in an extended Kalman filter without an explicit 3D reconstruction. Marker measurements therefore support the pose estimation even if a marker is visible to only one camera. Experiments with an inertial measurement unit and rectified stereo cameras show the advantage of the approach.
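
    The central trick, correcting the pose estimate directly from a single camera's 2D marker measurement with no explicit 3D reconstruction, amounts to an EKF update with a pinhole projection measurement model. A generic numpy sketch follows; the intrinsics, noise values, and state layout are assumptions rather than the paper's parameters.

    import numpy as np

    fx, fy = 800.0, 800.0                          # assumed pinhole intrinsics
    cx, cy = 320.0, 240.0
    R = np.eye(2) * 2.0                            # assumed pixel measurement noise (px^2)

    def project(p_cam):
        """Pinhole projection of a 3D point in camera coordinates to pixel coordinates."""
        x, y, z = p_cam
        return np.array([fx * x / z + cx, fy * y / z + cy])

    def ekf_update_2d(x, P, z_px, marker_cam, h_jacobian):
        """One EKF correction from a single-camera 2D marker observation.

        x, P        : state mean and covariance (e.g. device pose and velocities)
        z_px        : measured 2D pixel position of the marker
        marker_cam  : predicted marker position in the camera frame, a function of x
        h_jacobian  : 2 x len(x) Jacobian of the projection w.r.t. the state
        """
        z_pred = project(marker_cam)
        y = z_px - z_pred                          # innovation in pixel space
        S = h_jacobian @ P @ h_jacobian.T + R      # innovation covariance
        K = P @ h_jacobian.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ h_jacobian) @ P
        return x_new, P_new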

    SkillMaN — A skill-based robotic manipulation framework based on perception and reasoning

    One of the problems that service robotics deals with is bringing mobile manipulators to work in semi-structured human scenarios, which requires an efficient and flexible way to execute everyday tasks, like serving a cup in a cluttered environment. Usually, such tasks require the combination of symbolic and geometric levels of planning, as well as the integration of perception models with knowledge to guide both planning levels, resulting in a sequence of actions or skills which, according to the current knowledge of the world, may be executed. This paper proposes a planning and execution framework, called SkillMaN, for robotic manipulation tasks. It is equipped with a module holding experiential knowledge (learned from experience or given by the user) on how to execute a set of skills, like pick-up, put-down or open a drawer, using workflows as well as robot trajectories. The framework also contains an execution assistant with geometric tools and reasoning capabilities that manages how to actually execute the sequence of motions needed to perform a manipulation task (which are forwarded to the executor module). The assistant can store the relevant information in the experiential knowledge for further use, interpret the actually perceived situation (in case the preconditions of an action do not hold), and feed the updated state back to the planner so that planning can resume from there, allowing the robot to adapt to unexpected situations. To evaluate the viability of the proposed framework, an experiment is presented involving different skills performed with various types of objects in different scene contexts.
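
    As a rough, hypothetical sketch of the skill-with-workflow idea (not SkillMaN's actual API): a registry maps skill names to precondition checks and workflows of steps, and a failed precondition hands the updated state back for replanning.

    # Hypothetical skill registry: each skill has a precondition check and a workflow of steps.
    SKILLS = {
        "pick_up": {
            "preconditions": lambda world, obj: world.get(obj, {}).get("reachable", False),
            "workflow": ["approach", "open_gripper", "move_to_grasp", "close_gripper", "lift"],
        },
        "open_drawer": {
            "preconditions": lambda world, obj: world.get(obj, {}).get("handle_visible", False),
            "workflow": ["approach", "grasp_handle", "pull"],
        },
    }

    def execute_plan(plan, world):
        """Run a symbolic plan; on a failed precondition, return the updated state for replanning."""
        for skill_name, obj in plan:
            skill = SKILLS[skill_name]
            if not skill["preconditions"](world, obj):
                return {"status": "replan", "failed_skill": skill_name, "object": obj, "world": world}
            for step in skill["workflow"]:
                print(f"executing {skill_name}/{step} on {obj}")   # placeholder for motion execution
        return {"status": "done", "world": world}

    world = {"cup_1": {"reachable": True}, "drawer_1": {"handle_visible": False}}
    print(execute_plan([("pick_up", "cup_1"), ("open_drawer", "drawer_1")], world))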

    An ontology for failure interpretation in automated planning and execution

    This is a post-peer-review, pre-copyedit version of an article published in ROBOT - Iberian Robotics Conference. The final authenticated version is available online at: http://dx.doi.org/10.1007/978-3-030-35990-4_31. Autonomous indoor robots are supposed to accomplish tasks, like serving a cup, that involve manipulation actions in which the task and motion planning levels are coupled. In both planning levels and in the execution phase, several sources of failure can occur. In this paper, an interpretation ontology covering several sources of failure in automated planning and during the execution phase is introduced, with the purpose of making planning better informed and execution prepared for recovery. The proposed failure interpretation ontological module covers: (1) geometric failures, which may appear when, e.g., the robot cannot reach to grasp/place an object, there is no collision-free path, or there is no feasible Inverse Kinematics (IK) solution; (2) hardware-related failures, which may appear when, e.g., the robot in a real environment requires re-calibration (gripper or arm) or is sent to a non-reachable configuration; (3) software-agent-related failures, which may appear when, e.g., software components of the robot fail, such as an algorithm not being able to extract the proper features. The paper describes the concepts and the implementation of the failure interpretation ontology on foundational ontologies such as DUL and SUMO, and presents an example covering different planning situations that demonstrates the range of information the framework can provide for autonomous robots.
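
    The three failure categories could, purely for illustration, be modeled as a small OWL class hierarchy; a minimal owlready2 sketch follows (the IRI and class names are assumptions, not the ontology's actual terms, which build on DUL and SUMO).

    from owlready2 import get_ontology, Thing

    onto = get_ontology("http://example.org/failure-interpretation.owl")  # hypothetical IRI

    with onto:
        class Failure(Thing): pass

        # (1) geometric failures
        class GeometricFailure(Failure): pass
        class UnreachableGrasp(GeometricFailure): pass
        class NoCollisionFreePath(GeometricFailure): pass
        class NoIKSolution(GeometricFailure): pass

        # (2) hardware-related failures
        class HardwareFailure(Failure): pass
        class CalibrationRequired(HardwareFailure): pass
        class NonReachableConfiguration(HardwareFailure): pass

        # (3) software-agent-related failures
        class SoftwareFailure(Failure): pass
        class FeatureExtractionFailure(SoftwareFailure): pass

    print(list(onto.classes()))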

    Assembly planning in cluttered environments through heterogeneous reasoning

    Assembly recipes can be elegantly represented in description logic theories. With such a recipe, the robot can figure out the next assembly step through logical inference. However, before performing an action, the robot needs to ensure that various spatial constraints are met, such as that the parts to be put together are reachable, not occluded, etc. Such inferences are very complicated to support in logic theories, but specialized algorithms exist that efficiently compute qualitative spatial relations such as whether an object is reachable. In this work, we combine a logic-based planner for assembly tasks with geometric reasoning capabilities to enable robots to perform their tasks under spatial constraints. The geometric reasoner is integrated into the logic-based reasoning through decision procedures attached to symbols in the ontology.
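
    An illustrative sketch of the integration pattern (names and geometry checks are stand-ins, not the paper's implementation): symbolic preconditions are predicates whose truth the planner delegates to geometric decision procedures instead of proving them in the logic theory.

    import numpy as np

    # Hypothetical geometric decision procedures attached to ontology symbols.
    def is_reachable(world, part, arm_base=np.zeros(3), max_reach=0.9):
        """Crude reachability check: the part lies within the arm's reach sphere."""
        return np.linalg.norm(np.asarray(world[part]["position"]) - arm_base) <= max_reach

    def is_unoccluded(world, part):
        return not world[part].get("occluded_by")

    DECISION_PROCEDURES = {"reachable": is_reachable, "unoccluded": is_unoccluded}

    def precondition_holds(symbol, world, part):
        """The logic-based planner calls this instead of proving the predicate symbolically."""
        return DECISION_PROCEDURES[symbol](world, part)

    world = {"gear_1": {"position": [0.4, 0.2, 0.1], "occluded_by": None}}
    next_step_ok = all(precondition_holds(s, world, "gear_1") for s in ("reachable", "unoccluded"))
    print(next_step_ok)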